
    Reverse Sensitivity Analysis for Risk Modelling

    We consider the problem where a modeller conducts sensitivity analysis of a model consisting of random input factors, a corresponding random output of interest, and a baseline probability measure. The modeller seeks to understand how the model (the distribution of the input factors as well as the output) changes under a stress on the output's distribution. Specifically, for a stress on the output random variable, we derive the unique stressed distribution of the output that is closest in the Wasserstein distance to the baseline output distribution and satisfies the stress. We further derive the stressed model, including the stressed distribution of the inputs, which can be calculated in a numerically efficient way from a set of baseline Monte Carlo samples. The proposed reverse sensitivity analysis framework is model-free and allows for stresses on the output such as (a) the mean and variance, (b) any distortion risk measure including the Value-at-Risk and Expected Shortfall, and (c) expected-utility-type constraints, thus making the framework suitable for risk models.
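    As a minimal illustration of the mean-and-variance case (a): since the 2-Wasserstein distance between univariate distributions equals the L2 distance between their quantile functions, the stressed output quantile function is an increasing affine transform of the baseline one. The sketch below estimates it from Monte Carlo samples; the function names and the toy lognormal output are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def stressed_quantiles(y, new_mean, new_std):
    """W2-closest output distribution under a mean/variance stress.

    The 2-Wasserstein distance between univariate distributions is the
    L2 distance between quantile functions, so projecting onto the set
    {mean = new_mean, std = new_std} standardises and rescales the
    baseline quantile function (here: sorted Monte Carlo samples).
    """
    q = np.sort(y)  # empirical baseline quantile function
    return new_mean + new_std * (q - q.mean()) / q.std()

# Toy usage: stress a lognormal output to a 10% higher mean, same std.
rng = np.random.default_rng(0)
y = rng.lognormal(0.0, 0.5, size=100_000)
q_stressed = stressed_quantiles(y, 1.1 * y.mean(), y.std())
print(q_stressed.mean(), q_stressed.std())
```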

    Reverse sensitivity testing: What does it take to break the model?

    Sensitivity analysis is an important component of model building, interpretation and validation. A model comprises a vector of random input factors, an aggregation function mapping input factors to a random output, and a (baseline) probability measure. A risk measure, such as Value-at-Risk or Expected Shortfall, maps the distribution of the output to the real line. As is common in risk management, the value of the risk measure applied to the output is a decision variable. Therefore, it is of interest to associate a critical increase in the risk measure with specific input factors. We propose a global and model-independent framework, termed ‘reverse sensitivity testing’, comprising three steps: (a) an output stress is specified, corresponding to an increase in the risk measure(s); (b) a (stressed) probability measure is derived, minimising the Kullback-Leibler divergence with respect to the baseline probability, under constraints generated by the output stress; (c) changes in the distributions of input factors are evaluated. We argue that a substantial change in the distribution of an input factor corresponds to high sensitivity to that input, and introduce a novel sensitivity measure to formalise this insight. Implementation of reverse sensitivity testing in a Monte Carlo setting can be performed on a single set of input/output scenarios, simulated under the baseline model. Thus the approach circumvents the need for additional computationally expensive evaluations of the aggregation function. We illustrate the proposed approach through a numerical example of a simple insurance portfolio and a model of a London Insurance Market portfolio used in industry.
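    For expectation-type constraints, the KL-minimising measure in step (b) is an exponential tilt of the baseline, which in a Monte Carlo setting amounts to reweighting the existing scenarios rather than re-running the model. A minimal sketch under that assumption; the toy aggregation function and the +20% mean stress are our own choices.

```python
import numpy as np
from scipy.optimize import brentq

def kl_stress_weights(y, target_mean):
    """Scenario weights of the KL-minimal stressed measure with E_Q[Y] = target_mean.

    Minimising Kullback-Leibler divergence under an expectation constraint
    yields exponential tilting, w_i proportional to exp(theta * y_i); theta
    is chosen so that the reweighted mean hits the target.
    """
    def gap(theta):
        z = theta * y
        w = np.exp(z - z.max())  # stabilised exponential tilt
        w /= w.sum()
        return w @ y - target_mean

    theta = brentq(gap, -10.0, 10.0)
    z = theta * y
    w = np.exp(z - z.max())
    return w / w.sum()

# Toy usage: stress the output mean by +20%, then inspect how the
# distribution of input factor x1 shifts under the same weights (step (c)).
rng = np.random.default_rng(1)
x1, x2 = rng.gamma(2.0, 1.0, 50_000), rng.normal(0.0, 1.0, 50_000)
y = x1 + np.maximum(x2, 0.0)  # toy aggregation function
w = kl_stress_weights(y, 1.2 * y.mean())
print("stressed mean of x1:", w @ x1, "baseline mean:", x1.mean())
```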

    Euler allocations in the presence of non-linear reinsurance: comment on Major (2018)

    Major (2018) discusses Euler/Aumann-Shapley allocations for non-linear portfolios. He argues convincingly that many (re)insurance portfolios, while non-linear, are nevertheless positively homogeneous, owing to the way that deductibles and limits are typically set. For such non-linear but homogeneous portfolio structures, he proceeds to define and study a particular type of capital allocation. In this comment, we build on Major's (2018) insights but take a slightly different direction, considering Euler capital allocations for distortion risk measures applied to homogeneous portfolios. Thus, the important problem of capital allocation in portfolios with non-linear reinsurance is solved.
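    For the special case of Expected Shortfall (a distortion risk measure), the Euler allocation of a positively homogeneous portfolio takes the well-known conditional-expectation form, which is easy to estimate from Monte Carlo samples. A sketch under that assumption; the array shapes and the lognormal toy losses are ours.

```python
import numpy as np

def euler_es_allocation(X, alpha=0.99):
    """Monte Carlo Euler allocation of Expected Shortfall.

    X: (n_scenarios, n_lines) array of per-line losses after any positively
    homogeneous (possibly non-linear) reinsurance structure. The Euler
    contribution of line i to ES_alpha of the aggregate S is
    E[X_i | S >= VaR_alpha(S)], estimated over the tail scenarios of S.
    """
    S = X.sum(axis=1)
    tail = S >= np.quantile(S, alpha)
    return X[tail].mean(axis=0)  # contributions sum to (approx.) ES_alpha(S)

# Toy usage with three lognormal lines of business.
rng = np.random.default_rng(2)
X = rng.lognormal(0.0, 1.0, size=(200_000, 3))
print(euler_es_allocation(X), euler_es_allocation(X).sum())
```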

    Stressing Dynamic Loss Models

    Stress testing, and in particular reverse stress testing, is a prominent exercise in risk management practice. Reverse stress testing, in contrast to (forward) stress testing, aims to find an alternative but plausible model under which specific adverse stresses (i.e., constraints) are satisfied. Here, we propose a reverse stress testing framework for dynamic models. Specifically, we consider a compound Poisson process over a finite time horizon and stresses composed of expected values of functions applied to the process at the terminal time. We then define the stressed model as the probability measure under which the process satisfies the constraints and which minimizes the Kullback-Leibler divergence to the reference compound Poisson model. We solve this optimization problem, prove existence and uniqueness of the stressed probability measure, and provide a characterization of the Radon-Nikodym derivative from the reference model to the stressed model. We find that under the stressed measure, the intensity and the severity distribution of the process depend on time and the state space. We illustrate dynamic stress testing by considering stresses on VaR, and on VaR and CVaR jointly, and provide illustrations of how the stochastic process is altered under these stresses. We generalize the framework to multivariate compound Poisson processes and to stresses at times other than the terminal time. We illustrate the applicability of our framework by considering "what if" scenarios, where we answer the question: what is the severity of a stress on a portfolio component at an earlier time such that the aggregate portfolio exceeds a risk threshold at the terminal time? Moreover, for general constraints, we provide a simulation algorithm to simulate sample paths under the stressed measure.
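    For a single expectation constraint at the terminal time, the Radon-Nikodym derivative again takes an exponential-tilt form, so stressed quantities can be estimated by reweighting simulated terminal values, much as in the reverse-sensitivity sketch above. A sketch under that assumption; the gamma severities and the +20% mean stress are illustrative choices of ours.

```python
import numpy as np
from scipy.optimize import brentq

def compound_poisson_terminal(rng, lam, T, n_paths):
    """Terminal values X_T of a compound Poisson process with gamma severities."""
    n_jumps = rng.poisson(lam * T, size=n_paths)
    return np.array([rng.gamma(2.0, 1.0, k).sum() for k in n_jumps])

rng = np.random.default_rng(3)
xT = compound_poisson_terminal(rng, lam=5.0, T=1.0, n_paths=100_000)

# Radon-Nikodym weights exp(theta * X_T) / E[exp(theta * X_T)], with theta
# solved so that the stressed mean of X_T is 20% above the baseline mean.
def gap(theta):
    z = theta * xT
    w = np.exp(z - z.max()); w /= w.sum()
    return w @ xT - 1.2 * xT.mean()

theta = brentq(gap, -1.0, 1.0)
z = theta * xT
w = np.exp(z - z.max()); w /= w.sum()
print("baseline mean:", xT.mean(), "stressed mean:", w @ xT)
```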

    Differential Sensitivity in Discontinuous Models

    Differential sensitivity measures provide valuable tools for interpreting complex computational models used in applications ranging from simulation to algorithmic prediction. Taking the derivative of the model output in the direction of a model parameter can reveal input-output relations and the relative importance of model parameters and input variables. Nonetheless, it is unclear how such derivatives should be taken when the model function has discontinuities and/or input variables are discrete. We present a general framework for addressing such problems, considering derivatives of quantile-based output risk measures with respect to distortions of random input variables (risk factors) that impact the model output through step functions. We prove that, subject to weak technical conditions, the derivatives are well-defined and derive the corresponding formulas. We apply our results to the sensitivity analysis of compound risk models and to a numerical study of reinsurance credit risk in a multi-line insurance portfolio.
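    As a point of reference for what such a derivative measures, the sketch below estimates the sensitivity of VaR to a location distortion of one risk factor by central finite differences with common random numbers. This is a generic baseline of our own, not the paper's closed-form derivative, and the step-function toy model (a loss plus a default-triggered payment) is purely illustrative.

```python
import numpy as np

def var_sensitivity(model, sample_inputs, factor=0, eps=1e-2, alpha=0.95,
                    n=500_000, seed=4):
    """Central finite-difference estimate of d VaR_alpha / d eps for the
    location distortion X_factor -> X_factor + eps, using common random
    numbers across the two perturbed evaluations."""
    rng = np.random.default_rng(seed)
    X = sample_inputs(rng, n)
    Xp, Xm = X.copy(), X.copy()
    Xp[:, factor] += eps
    Xm[:, factor] -= eps
    var = lambda Z: np.quantile(model(Z), alpha)
    return (var(Xp) - var(Xm)) / (2.0 * eps)

# Toy discontinuous model: a base loss plus a recovery shortfall that is
# incurred only if the loss exceeds a trigger (a step function of x1).
model = lambda X: X[:, 0] + X[:, 1] * (X[:, 0] > 2.0)
inputs = lambda rng, n: np.column_stack(
    [rng.lognormal(0.0, 0.5, n), rng.gamma(2.0, 0.25, n)])
print(var_sensitivity(model, inputs, factor=0))
```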

    Optimal Robust Reinsurance with Multiple Insurers

    We study a reinsurer who faces multiple sources of model uncertainty. The reinsurer offers contracts to n insurers whose claims follow different compound Poisson processes. As the reinsurer is uncertain about the insurers' claim severity distributions and frequencies, they design reinsurance contracts that maximise their expected wealth subject to an entropy penalty. Insurers, meanwhile, seek to maximise their expected utility without ambiguity. We solve this continuous-time Stackelberg game for general reinsurance contracts and find that the reinsurer prices under a distortion of the barycentre of the insurers' models. We apply our results to proportional and excess-of-loss reinsurance contracts, and illustrate the solutions numerically. Furthermore, we solve the related problem where the reinsurer maximises, still under ambiguity, their expected utility, and compare the solutions.

    Relationship between molecular connectivity and carcinogenic activity: a confirmation with a new software program based on graph theory.

    For a database of 826 chemicals tested for carcinogenicity, we fragmented the structural formula of each chemical into all possible contiguous-atom fragments of between two and eight (non-hydrogen) atoms. The fragmentation was obtained using a new software program based on graph theory. We used 80% of the chemicals as a training set and 20% as a test set; the two sets were obtained by random sorting. From the training sets, an average (over 8 computer runs with independently sorted chemicals) of 315 different fragments were significantly (p < 0.125) associated with carcinogenicity or lack thereof. Even using this relatively low level of statistical significance, 23% of the molecules of the test sets lacked significant fragments. For the remaining 77% of the test-set molecules, we used the presence of significant fragments to predict carcinogenicity. The average accuracy of the predictions in the test sets was 67.5%. Chemicals containing only positive fragments were predicted with an accuracy of 78.7%, while accuracy was around 60% for chemicals characterized by contradictory fragments or only negative fragments. In a parallel manner, we performed eight paired runs in which carcinogenicity was attributed randomly to the molecules of the training sets; the fragments generated by these pseudo-training sets were devoid of any predictivity in the corresponding test sets. Using an independent software program, we confirmed, for the complex biological endpoint of carcinogenicity, the validity of a structure-activity relationship approach of the type proposed by Klopman and Rosenkranz with their CASE program.
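    The core graph-theoretic step, enumerating all connected fragments of two to eight heavy atoms, can be sketched as a recursive frontier expansion over the molecular graph. A minimal sketch of our own; the atom indexing and the propane-like toy input are assumptions for illustration.

```python
def connected_fragments(adj, min_size=2, max_size=8):
    """Enumerate all connected contiguous-atom fragments of a molecular graph.

    adj: dict mapping atom index -> set of neighbouring atom indices
    (hydrogens excluded). Fragments grow one neighbouring atom at a time,
    so every emitted atom set is connected by construction; collecting
    frozensets removes fragments reached via different growth orders.
    """
    fragments = set()

    def grow(frag, frontier):
        if len(frag) >= min_size:
            fragments.add(frozenset(frag))
        if len(frag) == max_size:
            return
        for atom in list(frontier):
            grow(frag | {atom}, (frontier | adj[atom]) - frag - {atom})

    for start in adj:
        grow({start}, set(adj[start]))
    return fragments

# Toy usage: a propane-like chain C0-C1-C2 yields {0,1}, {1,2}, {0,1,2}.
print(sorted(map(sorted, connected_fragments({0: {1}, 1: {0, 2}, 2: {1}}))))
```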

    Effect of cyclic loading on hydrogen diffusion in low carbon steels

    Carbon steels and low-alloyed steels may be affected by damage phenomena due to Hydrogen Embrittlement (HE), a particular form of Environmentally Assisted Cracking (EAC). The occurrence of HE depends on the intrinsic susceptibility of the steel, the applied stress, and the concentration of hydrogen inside the metal. It proceeds by a mechanism of absorption and subsequent diffusion of atomic hydrogen through the metal lattice. In steels with a yield strength lower than 700 MPa, HE occurs in the plastic deformation field, in the presence of dynamic loading at slow strain rates or cyclic fatigue loading at very low frequencies. Although several important studies have addressed the effect of loading conditions on hydrogen diffusion into the metal and on the HE mechanism, HE phenomena are not fully understood. In this work, the effect of applying cyclic loads on hydrogen diffusion parameters was studied both in the elastic and in the plastic deformation field, and the influence of mean load and load amplitude was analyzed. Hydrogen permeation tests were performed on API 5L X65 steel, in accordance with ISO 17081:2014. The specimen acted as a bi-electrode between the two compartments of a Devanathan-Stachurski cell. The anodic side of the specimen was polarized at +340 mV vs Ag/AgCl in a 0.1 M NaOH aerated solution, while the cathodic compartment was filled with an aerated borate solution. A controller kept the temperature at 20 ± 0.5 °C. Once the passivity current registered on the anodic side reached values of 0.05 µA/cm², a cathodic current density of 0.50 mA/cm² was applied to the charging (cathodic) side. The study included tests with sine-waveform cyclic loading, with a maximum level equal to 110% TYS, at a frequency of 10⁻² Hz. The results confirmed the values of the hydrogen diffusion coefficient usually reported for low-alloyed steels with a sorbitic microstructure. Strain-hardened specimens (stretched above the yield strength) showed an increase of the steady-state current and an extension of the time lag, denoting a slight decrease in the apparent hydrogen diffusion coefficient due to the trapping effect of the cold-deformed steel matrix. Under cyclic loading, an instantaneous current peak followed by a significant transient decrease occurred after load application, whereas no relevant variation of the permeation curve compared to unloaded specimens was observed if specimens were already loaded before hydrogen charging. The instantaneous current peak reached values much higher than the steady-state current. This is ascribed to the rupture of the passive film caused by loading and its subsequent reformation; indeed, the same peak can be observed in tests performed on specimens without hydrogen permeation. The following transient, in which the permeation current decreases below the steady state and then returns to it, denotes a relevant trapping effect that instantaneously reduces the mobile hydrogen concentration in the lattice. This effect becomes more significant for loads approaching the yield strength, and especially beyond it, and is only observed at the first loading step: subsequent unloading and reloading steps at the same mean load showed no transient in the permeation current.
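    The apparent diffusion coefficient discussed in the results is conventionally extracted from the permeation transient by the time-lag method of ISO 17081, where the time lag is the time at which the flux reaches 63% of its steady-state value and D_app = L²/(6·t_lag). A sketch of that evaluation; the synthetic transient used for the consistency check is our own construction.

```python
import numpy as np

def apparent_diffusion_coefficient(t, J, thickness):
    """Apparent hydrogen diffusion coefficient via the time-lag method.

    t: time in s; J: permeation flux (any consistent unit); thickness L in cm.
    The time lag t_lag is the time at which J(t) first reaches 63% of the
    steady-state flux, and D_app = L**2 / (6 * t_lag), in cm^2/s.
    """
    j_ss = J[-100:].mean()                  # steady-state flux estimate
    t_lag = t[np.argmax(J >= 0.63 * j_ss)]  # first crossing of 63% of J_ss
    return thickness**2 / (6.0 * t_lag)

# Consistency check against an ideal diffusion transient
# (L = 0.1 cm, D = 1e-6 cm^2/s, so t_lag should be near L^2/(6D) ~ 1667 s).
t = np.linspace(1.0, 20_000.0, 5_000)
L, D = 0.1, 1e-6
J = 1.0 + 2.0 * sum((-1) ** n * np.exp(-(n * np.pi) ** 2 * D * t / L**2)
                    for n in range(1, 60))
print(apparent_diffusion_coefficient(t, J, L))  # ~1e-6 cm^2/s
```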

    IMU-based human activity recognition and payload classification for low-back exoskeletons

    Nowadays, work-related musculoskeletal disorders have a drastic impact on a large part of the world population, and low-back pain is the leading cause of absence from work in the industrial sector. Robotic exoskeletons have great potential to improve industrial workers' health and quality of life. Nonetheless, current solutions are often limited by sub-optimal control systems: because of the dynamic environment in which they are used, failure to adapt to the wearer and the task may be limiting exoskeleton adoption in occupational scenarios. In this context, we present a deep-learning-based approach exploiting inertial sensors to provide industrial exoskeletons with human activity recognition and adaptive payload compensation. Inertial measurement units are easily wearable or embeddable in any industrial exoskeleton. We exploited Long Short-Term Memory networks both to perform human activity recognition and to classify the weight of lifted objects up to 15 kg. We found a median F1 score of 90.80% (activity recognition) and 87.14% (payload estimation) with subject-specific models trained and tested on 12 (6M/6F) young healthy volunteers. We also evaluated the applicability of this approach with an in-lab real-time test in a simulated target scenario. These high-level algorithms may be useful to fully exploit the potential of powered exoskeletons to achieve symbiotic human-robot interaction.
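    A minimal sketch of the kind of recurrent classifier described, built with Keras on windowed IMU data. The window length (2 s at 100 Hz), six channels, five activity classes and all hyperparameters are our assumptions, not the paper's reported configuration.

```python
import numpy as np
import tensorflow as tf

# Assumed shapes: 2 s windows of 6-axis IMU data at 100 Hz, 5 activity classes.
WINDOW, CHANNELS, N_CLASSES = 200, 6, 5

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # temporal features
    tf.keras.layers.LSTM(32),                         # sequence summary
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random arrays standing in for labelled IMU windows; a payload classifier
# would be identical apart from its output classes (weight bins).
X = np.random.randn(256, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```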